Bayesian inference offers principled tools to tackle many critical problems of modern neural networks, such as poor calibration, poor generalization, and data inefficiency. However, scaling Bayesian inference to large architectures is challenging and requires restrictive approximations. Monte Carlo Dropout has been widely used as a relatively cheap way to perform approximate inference and estimate uncertainty with deep neural networks. Traditionally, the dropout mask is sampled independently from a fixed distribution. Recent works show that the dropout mask can instead be viewed as a latent variable, which can be inferred with variational inference. These methods face two important challenges: (a) the posterior distribution over masks can be highly multi-modal, which is difficult to approximate with standard variational inference, and (b) it is not trivial to fully exploit sample-dependent information and correlations among dropout masks to improve posterior estimation. In this work, we propose GFlowOut to address these issues. GFlowOut leverages the recently proposed probabilistic framework of Generative Flow Networks (GFlowNets) to learn the posterior distribution over dropout masks. We empirically demonstrate that GFlowOut yields predictive distributions that generalize better to out-of-distribution data and provides uncertainty estimates that lead to better performance in downstream tasks.
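For context, a minimal sketch of the Monte Carlo Dropout baseline that this abstract builds on: masks are resampled independently from a fixed Bernoulli distribution at test time, and the spread across forward passes serves as an uncertainty estimate. This is an illustrative PyTorch example, not the authors' GFlowNet-based mask posterior; the model and sample counts are arbitrary assumptions.

```python
# A minimal sketch of Monte Carlo Dropout uncertainty estimation (PyTorch).
# This is the fixed, independent-mask baseline the abstract describes,
# NOT GFlowOut's learned posterior over masks.
import torch
import torch.nn as nn

class MCDropoutMLP(nn.Module):
    def __init__(self, d_in=16, d_hidden=64, d_out=2, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Dropout(p),            # mask resampled on every forward pass
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_predict(model, x, n_samples=30):
    """Average class probabilities over n_samples dropout masks; the
    per-sample standard deviation is a crude epistemic-uncertainty proxy."""
    model.train()  # keep dropout active at test time
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

model = MCDropoutMLP()
x = torch.randn(8, 16)
mean, std = mc_predict(model, x)
print(mean.shape, std.shape)  # torch.Size([8, 2]) torch.Size([8, 2])
```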
Our goal is to assess whether a change to an AutoML system (e.g., to its search space or hyperparameter optimization) will improve the final model's performance on production tasks. However, we cannot test the change on the production tasks themselves. Instead, we only have access to limited descriptors of the tasks the AutoML system previously executed, such as the number of data points or features. We also have a set of development tasks on which changes can be tested, e.g., sampled from OpenML with no usage restrictions. However, the development and production tasks are distributed differently, which can lead us to pursue changes that improve development performance but not production performance. This paper proposes a method that leverages descriptor information about the AutoML production tasks to select a filtered subset of the most relevant development tasks. Empirical studies show that our filtering strategy improves the ability to evaluate changes on held-out tasks whose distribution differs from that of the development tasks.
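One way to realize the filtering idea is a nearest-neighbor selection in descriptor space: keep only the development tasks whose meta-features lie closest to the production tasks' descriptors. The sketch below is an assumption-laden illustration (the descriptor set, standardization, and k-NN criterion are mine, not the paper's exact method).

```python
# Illustrative sketch: filter development tasks by descriptor similarity
# to production tasks. The k-NN filter and descriptor choice are
# assumptions for demonstration, not the paper's published algorithm.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def filter_dev_tasks(dev_descriptors, prod_descriptors, keep=20):
    """Return indices of the `keep` development tasks closest to any
    production task in standardized descriptor space."""
    mu = dev_descriptors.mean(0)
    sigma = dev_descriptors.std(0) + 1e-8
    dev_z = (dev_descriptors - mu) / sigma
    prod_z = (prod_descriptors - mu) / sigma
    nn = NearestNeighbors(n_neighbors=1).fit(prod_z)
    dist, _ = nn.kneighbors(dev_z)   # distance of each dev task to nearest prod task
    return np.argsort(dist.ravel())[:keep]

rng = np.random.default_rng(0)
dev = rng.normal(size=(200, 2))            # e.g. [log #rows, log #features] per dev task
prod = rng.normal(loc=1.5, size=(10, 2))   # descriptors of production tasks
selected = filter_dev_tasks(dev, prod)
print(selected[:5])
```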
Deep convolutional neural networks (CNNs) have recently achieved state-of-the-art handwritten text recognition (HTR) performance. However, recent studies have shown that the learning performance of typical CNNs is limited because they are homogeneous networks built from a simple (linear) neuron model. Operational neural networks (ONNs), whose heterogeneous network structure incorporates nonlinear neurons, were recently proposed to address this drawback. Self-ONNs are a self-organized variant of ONNs with a generative neuron model that can generate any nonlinear function using a Taylor approximation. In this study, to improve the state-of-the-art performance level in HTR, novel network models with 2D self-organized (Self-ONN) operational layers at their core are proposed. In addition, deformable convolutions, which were recently shown to better handle variations in writing style, are used in this study. Results on the IAM English dataset and the Hadara80p Arabic dataset show that the proposed models with Self-ONN operational layers significantly improve the character error rate (CER) and word error rate (WER). Compared with the counterpart CNNs, Self-ONNs reduce CER and WER by 1.2% and 3.4% on Hadara80p, and by 0.199% and 1.244% on the IAM dataset. Results on the benchmark IAM demonstrate that the proposed models with Self-ONN operational layers outperform recent deep CNN models by a significant margin, and using Self-ONNs with deformable convolutions demonstrates exceptional results.
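The generative-neuron idea can be illustrated compactly: each Self-ONN layer applies a learnable Q-th order Taylor/Maclaurin expansion, which can be implemented as Q parallel convolutions over powers of the input. The sketch below is a simplified re-implementation under that reading of the abstract, not the authors' code; deformable convolution is omitted for brevity.

```python
# A minimal sketch of a 2D self-organized operational (Self-ONN) layer:
# output = sum_{k=1..Q} Conv_k(x^k), i.e., a learnable Q-th order
# polynomial per neuron. Illustrative only; not the authors' implementation.
import torch
import torch.nn as nn

class SelfONN2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, q=3):
        super().__init__()
        # one convolution per power term of the Taylor expansion
        self.convs = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
            for _ in range(q)
        ])

    def forward(self, x):
        x = torch.tanh(x)  # bound activations so higher powers stay stable
        return sum(conv(x ** (k + 1)) for k, conv in enumerate(self.convs))

layer = SelfONN2d(1, 8, kernel_size=3, q=3)
out = layer(torch.randn(2, 1, 32, 32))
print(out.shape)  # torch.Size([2, 8, 32, 32])
```

With q=1 this reduces to an ordinary convolutional layer, which makes the comparison against "counterpart CNNs" in the abstract concrete: the extra power terms are what the self-organization learns.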